Good. Now, it's a real pity that you didn't look at this, because it would have made clear that even informed search, even A* search with a good heuristic, is much too slow for many applications. The most virulent problem is that you have to remember too much, which means that in certain situations you want to do something else, because you're never going to penetrate your search tree far enough. What you're losing that way is systematic behavior. We call a search procedure systematic if, given enough time, possibly exponential time, it eventually visits all nodes in the search tree. The good thing about systematic search algorithms is that they're complete, except in a small set of cases where they're not; you usually have to engineer something to actually make them incomplete. The other problem is that, precisely because they are complete, we cannot place a bound on how many nodes we have to keep in memory. We cannot bound the space complexity, and that hinders the whole search approach in search spaces with very many nodes. Think of chess: we have too many nodes to actually deal with. But chess is easy; many problems are much worse than chess, with even more nodes. And don't say Go, because Go is also easy. If you do path finding in a thousand-dimensional space with interesting barriers in it, then kaboom. How do you deal with those kinds of things? The alternative you have is to say: okay, we're not going to be systematic. That has problems, of course; you might get into loops and not know it, and so on. But in certain situations that's the best you can do, and for certain classes of problems that's all you need. For instance, configuration problems. You probably all know the eight queens problem.
It's a classic. The question is: can I put eight queens onto a chess board so that they don't attack each other? And the answer is yes. And you can just try: put a queen on the leftmost square, and then this square is out, that one is out, that one is obviously out. So the first place you can put the next queen is, for instance, here. You do it again, and again, and again, and again, and now the question is: what do I do now? This square here still works, so why is there no queen here? But that one doesn't work anymore. You fail in the last step, boom. The answer is that even though you don't find one directly with the simplest greedy approach, there actually are solutions, and quite a few of them: 92 in total, or 12 if you count only those that are distinct up to rotation and reflection. You can look it up. But not that many. How do you
deal with this? This is what we call a configuration problem. There is a search process involved, but we're not actually interested in the path to the solution, only in which goal state we arrive at. For configuration problems we therefore often use what we call local search algorithms, which really operate on a single state; we're not interested in the path to that state, just the state itself. Is there, anywhere in this huge search space, a configuration that meets my requirements? That's not going to be systematic, because we're not remembering any past states. But there are lots of very
important applications. Integrated circuit design, for instance: how do I route all these lines on a circuit board so that they don't cross, or cross only minimally often? Factory layout: you have a bunch of machines and a bunch of conveyor belts between them; how do you place them in your building so that your goods in production take the least time? Because time is money. Portfolio management: if you have a bunch of stocks, how do you allocate them so that you're safest against fluctuations? Fleet deployment, whatever. Those kinds of problems are actually solved by local search algorithms, and we're going to look at them, at least from a qualitative point of view.
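As an illustration (my sketch, not from the lecture), here is minimal local search on the eight queens configuration problem: hill climbing on the number of attacking pairs, moving one queen within its column, with random restarts because plain hill climbing can get stuck in local minima. All names here are made up for the example.

```python
import random

def conflicts(board):
    """Number of attacking queen pairs; board[c] is the row of the queen in column c."""
    n = len(board)
    return sum(
        1
        for c1 in range(n)
        for c2 in range(c1 + 1, n)
        if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1
    )

def hill_climb(n=8, max_restarts=100, rng=random):
    """Local search on a single state: repeatedly move to the best neighbour
    (one queen moved within its column); restart from a random state when stuck."""
    for _ in range(max_restarts):
        board = [rng.randrange(n) for _ in range(n)]
        while True:
            best = conflicts(board)
            if best == 0:
                return board                      # goal configuration found
            move = None
            for col in range(n):
                old = board[col]
                for row in range(n):
                    if row == old:
                        continue
                    board[col] = row
                    c = conflicts(board)
                    if c < best:
                        best, move = c, (col, row)
                board[col] = old                  # undo the trial move
            if move is None:                      # local minimum: restart
                break
            board[move[0]] = move[1]
    return None

solution = hill_climb()
print(solution, conflicts(solution))
```

Note that nothing about the path is recorded; the state itself is the answer, which is exactly the configuration-problem setting described above.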
The algorithms we're going to use for that are what we often call iterative improvement algorithms. That just means you start with a random state, any one you pick, and then you look at the neighboring states for better states and search over those. If you think of the traveling salesman problem, which I'm sure you know, you start from a random tour and then make it better by changing small parts of it.
Duration: 00:20:11
Recorded: 2020-10-28
Uploaded: 2020-10-28 10:37:02
Language: en-US

Explanation of Local Search and Hill-Climbing.